In this project, my intention was to find out whether using median returns in portfolio optimization could lead to better portfolio performance than using mean returns. The mean return might not accurately describe expected returns due to the non-normal distribution of financial returns. The mean is also known to be affected by outliers, whereas the median tends to be a more robust measure of central tendency.
This project found that using the median return to find the optimal portfolio did not lead to a significant improvement in portfolio performance over using the mean return. However, this should not be taken as evidence against the median return, as the project had its own flaws. Firstly, the optimized portfolios used only a selected number of stocks, which means the results are not generalizable to any portfolio. Secondly, the number of random portfolios used to simulate the performance of hypothetical portfolios was only a small subset of a large possible set of permutations, decreasing the likelihood of finding the true optimal portfolio.
library(doParallel) # For parallel computation in foreach loops
library(PortfolioAnalytics) # For portfolio optimization and analysis
library(RColorBrewer) # For color palettes in plots
library(tidyquant) # For quantmod and PerformanceAnalytics functions
library(tidyverse) # For dplyr and ggplot2 functions (data manipulation and plotting)
I retrieved the weekly adjusted closing prices of 10 stocks from Yahoo Finance, covering January 2015 to June 2022. The weekly returns were then calculated using the discrete (simple) method for use in computing portfolio returns.
The 10 stocks used in this project are Procter & Gamble (PG), Walmart (WMT), Booking Holdings (BKNG), Salesforce (CRM), 3M (MMM), Starbucks (SBUX), Walt Disney (DIS), Home Depot (HD), Coca-Cola (KO), and NVIDIA (NVDA).
The weekly adjusted closing price of the tickers can be retrieved
using quantmod::getSymbols().
# Vector of tickers to include in portfolio
tickers <- c("PG", "WMT", "BKNG", "CRM", "MMM", "SBUX", "DIS", "HD", "KO", "NVDA")
startdate <- as.Date("2015-01-01")
enddate <- as.Date("2022-07-01")
price_data <- NULL
# Loop to get adjusted closing prices for all stocks
for(t in tickers) {
price_data <- cbind(price_data,
quantmod::getSymbols(Symbols = t, src = "yahoo", auto.assign = FALSE,
from = startdate, to = enddate, periodicity = "weekly") %>% Ad())
}
# Check dimension of object, start and end date of data collected
dim(price_data); start(price_data); end(price_data)
## [1] 392 10
## [1] "2015-01-01"
## [1] "2022-06-30"
# See first 6 observations in price_data
head(price_data)
## PG.Adjusted WMT.Adjusted BKNG.Adjusted CRM.Adjusted MMM.Adjusted
## 2015-01-01 71.96553 74.80490 1069.57 56.93 126.6301
## 2015-01-08 71.90160 73.12472 1035.67 57.25 126.6618
## 2015-01-15 72.49287 73.15007 1045.53 57.71 128.8014
## 2015-01-22 68.51836 73.30204 1003.25 56.35 129.9107
## 2015-01-29 69.03337 73.15851 1028.28 58.99 130.6001
## 2015-02-05 68.91265 72.89676 1060.06 58.80 130.3387
## SBUX.Adjusted DIS.Adjusted HD.Adjusted KO.Adjusted NVDA.Adjusted
## 2015-01-01 35.21173 86.51881 88.14967 33.73298 4.599706
## 2015-01-08 34.88208 87.82362 86.65532 33.39557 4.743898
## 2015-01-15 35.25945 87.76771 87.40668 34.02329 4.878477
## 2015-01-22 37.98340 86.36969 88.42825 32.89337 4.640560
## 2015-01-29 38.47352 94.39432 91.52673 32.70506 4.844833
## 2015-02-05 39.52381 94.94420 93.13078 33.24647 5.001040
Discrete returns can be calculated using
PerformanceAnalytics::Return.calculate().
return_data <- na.omit(PerformanceAnalytics::Return.calculate(prices = price_data, method = "discrete")) %>%
`colnames<-`(paste(tickers, "Return", sep = "."))
dim(return_data); head(return_data)
## [1] 391 10
## PG.Return WMT.Return BKNG.Return CRM.Return MMM.Return
## 2015-01-08 -0.0008883975 -0.0224607743 -0.031694890 0.005620938 0.000250217
## 2015-01-15 0.0082233362 0.0003465996 0.009520392 0.008034917 0.016892193
## 2015-01-22 -0.0548262355 0.0020775100 -0.040438847 -0.023566124 0.008612952
## 2015-01-29 0.0075164530 -0.0019580901 0.024948945 0.046850117 0.005306891
## 2015-02-05 -0.0017487195 -0.0035778340 0.030906007 -0.003220936 -0.002001866
## 2015-02-12 0.0072398463 -0.0005791890 0.059364496 0.071428590 0.024083231
## SBUX.Return DIS.Return HD.Return KO.Return NVDA.Return
## 2015-01-08 -0.009361964 0.0150812285 -0.016952384 -0.010002259 0.03134809
## 2015-01-15 0.010818304 -0.0006366055 0.008670743 0.018796384 0.02836886
## 2015-01-22 0.077254355 -0.0159286931 0.011687527 -0.033210223 -0.04876870
## 2015-01-29 0.012903718 0.0929102327 0.035039401 -0.005724832 0.04401904
## 2015-02-05 0.027298930 0.0058253612 0.017525559 0.016554503 0.03224198
## 2015-02-12 0.024341809 0.0195348533 0.015229905 -0.015105150 0.06295051
The (arithmetic) mean and median are two measures of central tendency that can be used to describe expected returns of a stock.
Since the distribution of stock returns tends to be non-normal, the
median may be a more appropriate measure. For example, the density plot
of PG shows that the returns do not follow a normal
distribution. The Q-Q plot in the top-left corner also indicates a
non-normal distribution.
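Beyond visual inspection, a formal normality test can back up this claim. A minimal sketch using base R's shapiro.test() (run here on a simulated heavy-tailed series, not the project's return data):

```r
# Sketch: Shapiro-Wilk normality test; a small p-value suggests the series
# is unlikely to come from a normal distribution. In practice this would be
# run on return_data$PG.Return; a fat-tailed toy series is used here.
set.seed(123)
toy_returns <- rt(n = 391, df = 3) * 0.02  # t-distributed, heavy tails
shapiro.test(toy_returns)$p.value < 0.05   # TRUE: normality rejected
```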
chart.Histogram(R = return_data$PG.Return,
method = c("add.density", "add.normal", "add.qqplot"),
main = "Density Plot of PG Historical Returns")
legend(x = "topright", legend = c("Density Plot", "Normal Distribution"), lwd = 2, col = c("darkblue", "blue"))
The mean and median returns of each stock in the portfolio are:
stock_means <- apply(X = return_data, MARGIN = 2, FUN = mean)
stock_medians <- apply(X = return_data, MARGIN = 2, FUN = median)
data.frame(rbind(Mean = stock_means, Median = stock_medians))
Although the mean and median return values are quite small, we can see a noticeable difference for some of the stocks.
The variance measures the squared deviation of returns from the mean, but standard deviation (SD) is used since it is in the same units as returns.
The SD of each stock in the portfolio is:
stock_sd <- apply(X = return_data, MARGIN = 2, FUN = sd)
data.frame(rbind(SD = stock_sd))
If returns were normally distributed, we could use the 68-95-99.7 rule, where 68%/95%/99.7% of returns fall within 1 SD/2 SD/3 SD of the mean.
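As a quick sanity check of that rule (a toy simulation, not the project data):

```r
# Toy check of the 68-95-99.7 rule on simulated normal weekly returns
set.seed(42)
r <- rnorm(1e5, mean = 0.001, sd = 0.02)
mean(abs(r - mean(r)) <= 1 * sd(r))  # ~0.68
mean(abs(r - mean(r)) <= 2 * sd(r))  # ~0.95
mean(abs(r - mean(r)) <= 3 * sd(r))  # ~0.997
```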
The mean absolute deviation (MAD) is the mean of the absolute deviations between returns and a central point. The central point usually refers to the mean, but the median can be used as well. However, since the mean and median returns are similar, we can expect the MAD computed around either central point to be similar too.
\(MAD = \frac{1}{N}\sum_{i=1}^N |R_i - m(R)|\), where \(R_i\) is the return of a stock, and \(m(R)\) is the mean or median return of the stock.
The MAD around the mean and median of each stock in the portfolio are:
stock_MAD <- apply(X = return_data, MARGIN = 2, FUN = function(x) {
mean(abs(x - mean(x)))
})
stock_MADmed <- apply(X = return_data, MARGIN = 2, FUN = function(x) {
mean(abs(x - median(x)))
})
data.frame(rbind(MAD_mean = stock_MAD, MAD_median = stock_MADmed))
We can see that the MAD of each stock is smaller than its SD, as the SD places more weight on outliers than the MAD does. The MAD values calculated around the mean and around the median are quite similar. I will stick with the MAD around the mean as a measure of risk.
The median absolute deviation (also abbreviated as MAD, but for the purpose of distinguishing the two measures, I used MeAD instead) is the median of the absolute deviations between returns and its median. It is a robust measure of variability.
\(MeAD = med(|R_i - med(R)|)\), where \(R_i\) is the return of a stock, and \(med(R)\) is the median return of the stock.
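Note that base R's stats::mad() computes this same quantity, but scaled by a constant (1.4826 by default) so that it estimates the SD under normality; setting constant = 1 recovers the raw MeAD used here. A minimal check:

```r
# stats::mad(x, constant = 1) equals the raw median absolute deviation
x <- c(0.01, -0.02, 0.03, 0.00, -0.01)
raw_mead <- median(abs(x - median(x)))
all.equal(stats::mad(x, constant = 1), raw_mead)  # TRUE
```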
The MeAD of each stock in the portfolio is:
stock_MeAD <- apply(X = return_data, MARGIN = 2, FUN = function(x) {
median(abs(x - median(x)))
})
data.frame(rbind(MeAD = stock_MeAD))
The different return and risk measures based on the mean and median are summarized below:
data.frame(rbind(Mean = stock_means, Median = stock_medians,
SD = stock_sd, MAD_mean = stock_MAD, MAD_median = stock_MADmed, MeAD = stock_MeAD))
Portfolio optimization using mean returns with variance/SD and MAD as risk measures is common, and a simple Google search returns many research papers and articles on it. Using median returns instead of mean returns has also been widely researched. However, the use of MeAD as a risk measure in portfolio optimization does not seem to be documented. This project tested MeAD as part of the research, but it may be a spurious measure of risk, as no statistical tests or simulations were carried out to establish its significance.
Optimization based on median measures is usually not implemented by packages and functions, so I used a set of random hypothetical portfolios in this project.
Before optimizing any objectives, I generated a set of random portfolios which satisfy constraints where the sum of the component weights must be equal to 1 and the weight of each component is between 0% and 25% of the portfolio.
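Conceptually, one way to draw such a weight vector by hand is rejection sampling over the simplex. This is a sketch only; the project uses PortfolioAnalytics::random_portfolios() below:

```r
# Sketch: draw 10 non-negative weights summing to 1, each capped at 0.25.
# Differences of sorted uniforms give a uniform draw over the simplex;
# draws violating the 25% cap are rejected and redrawn.
set.seed(1)
repeat {
  w <- diff(c(0, sort(runif(9)), 1))
  if (all(w <= 0.25)) break
}
round(sum(w), 10)  # 1
```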
portspec <- PortfolioAnalytics::portfolio.spec(assets = tickers)
# Sum of weights constrained to 1, can also specify as type = "full investment"
portspec <- PortfolioAnalytics::add.constraint(portfolio = portspec,
type = "weight_sum",
min_sum = 1, max_sum = 1)
# Weight of each portfolio component can vary between minimum of 0% and maximum of 25%
portspec <- PortfolioAnalytics::add.constraint(portfolio = portspec,
type="box",
min = 0, max = 0.25)
portspec
## **************************************************
## PortfolioAnalytics Portfolio Specification
## **************************************************
##
## Call:
## PortfolioAnalytics::portfolio.spec(assets = tickers)
##
## Number of assets: 10
## Asset Names
## [1] "PG" "WMT" "BKNG" "CRM" "MMM" "SBUX" "DIS" "HD" "KO" "NVDA"
##
## Constraints
## Enabled constraint types
## - weight_sum
## - box
set.seed(43594)
rand_port <- PortfolioAnalytics::random_portfolios(portfolio = portspec,
permutations = 20000,
rp_method = "sample",
eliminate = TRUE)
dim(rand_port); head(rand_port)
## [1] 1740 10
## PG WMT BKNG CRM MMM SBUX DIS HD KO NVDA
## [1,] 0.100 0.100 0.100 0.100 0.100 0.100 0.100 0.100 0.100 0.100
## [2,] 0.116 0.036 0.186 0.206 0.014 0.166 0.050 0.068 0.000 0.158
## [3,] 0.118 0.132 0.148 0.062 0.244 0.194 0.002 0.000 0.048 0.052
## [4,] 0.000 0.170 0.088 0.170 0.168 0.034 0.136 0.092 0.024 0.118
## [5,] 0.202 0.006 0.140 0.038 0.022 0.100 0.080 0.094 0.072 0.246
## [6,] 0.180 0.188 0.056 0.056 0.120 0.124 0.000 0.210 0.010 0.056
1,740 portfolio permutations were found, which will be used for the rest of this project.
I used three different optimization objectives in this project: maximizing return (mean or median), minimizing risk (variance/SD, MAD or MeAD), and minimizing risk among portfolios whose return exceeds that of the equal-weight portfolio.
To make the different strategies more practical, I attempted to replicate a half-yearly re-optimization strategy using the previous one year of data. In this case, I would calculate the return and/or risk of the hypothetical portfolios with 2015 return data and implement the optimal weights in 2016H1. Then, I re-optimized the weights using data from 2015H2 to 2016H1 for 2016H2, and so on. The last re-optimization used 2021 data for 2022H1, since I only retrieved data up to 30 June 2022.
opt_periods <- c("2015", "2015-07/2016-06",
"2016", "2016-07/2017-06",
"2017", "2017-07/2018-06",
"2018", "2018-07/2019-06",
"2019", "2019-07/2020-06",
"2020", "2020-07/2021-06",
"2021")
ret_periods <- c("2016-01/2016-06", "2016-07/2016-12",
"2017-01/2017-06", "2017-07/2017-12",
"2018-01/2018-06", "2018-07/2018-12",
"2019-01/2019-06", "2019-07/2019-12",
"2020-01/2020-06", "2020-07/2020-12",
"2021-01/2021-06", "2021-07/2021-12",
"2022-01/2022-06")
data.frame(cbind(Optimization_Period = opt_periods, Return_Period = ret_periods))
The weekly returns of hypothetical portfolios can be calculated using
opt_periods and rand_port, based on the
formula \(R_p = \sum_{i=1}^N R_i w_i\).
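These period strings work because xts objects (such as those returned by getSymbols()) can be subset with ISO-8601 date ranges. A toy illustration with invented weekly dates, not the project data:

```r
library(xts)
# xts subsetting with an ISO-8601 range: "2015-07/2016-06" selects all
# observations from 2015-07-01 through 2016-06-30 inclusive
dates <- seq(as.Date("2015-01-01"), by = "week", length.out = 104)
toy <- xts(seq_along(dates), order.by = dates)
nrow(toy["2015-07/2016-06"])
```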
I used geometric chaining to aggregate returns and rebalanced the
portfolios quarterly. We can use the performance of random portfolios in
each optimization period to select portfolio weights which optimized
that period’s return and/or risk.
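As a toy check of the weighted-sum formula with invented numbers:

```r
# R_p = sum_i(w_i * R_i): one-period portfolio return is the
# weighted sum of component returns (illustrative numbers only)
w <- c(0.6, 0.4)     # component weights, summing to 1
R <- c(0.02, -0.01)  # one-period component returns
sum(w * R)           # 0.6*0.02 + 0.4*(-0.01) = 0.008
```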
rp_returns <- foreach(i = 1:nrow(rand_port), .combine = "cbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data,
weights = rand_port[i, ],
geometric = TRUE,
rebalance_on = "quarters")
}
I also calculated the returns of an equal-weight portfolio to compare against the performance of the optimized portfolios.
# If do not include weights, equal weight portfolio is assumed
ewp_return <- PerformanceAnalytics::Return.portfolio(R = return_data,
geometric = TRUE,
rebalance_on = "quarters")
In this section, I find portfolios that maximize mean and median returns in each optimization period, although these types of objectives may not be practical in portfolio optimization, as they assume that investors create their portfolios based on the best historical returns. However, these strategies can give an idea of the risks taken by an investor, based on the drawdown of the portfolios.
Find weights that maximize mean portfolio returns in each optimization period:
maxmean_weight <- foreach(i = opt_periods, .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Mean.arithmetic(x = rp_returns[i, ])
opt_weight <- rand_port[which.max(tmp), ]
}
rownames(maxmean_weight) <- paste("OP", 1:nrow(maxmean_weight), sep = "")
data.frame(maxmean_weight)
Calculate return based on the selected portfolio weights in the return period:
# Returns of best mean portfolio in ret_period using weights from opt_period
maxmean_returns <- foreach(i = ret_periods, j = 1:nrow(maxmean_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = maxmean_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Find weights that maximize median portfolio returns in each optimization period:
# Find weights that maximizes median of each optimization period
maxmed_weight <- foreach(i = opt_periods, .combine = "rbind") %do% {
tmp <- apply(X = rp_returns[i, ], MARGIN = 2, FUN = median)
opt_weight <- rand_port[which.max(tmp), ]
}
rownames(maxmed_weight) <- paste("OP", 1:nrow(maxmed_weight), sep = "")
data.frame(maxmed_weight)
Calculate return based on the selected portfolio weights in the return period:
# Returns of best median portfolio in ret_period using weights from opt_period
maxmed_returns <- foreach(i = ret_periods, j = 1:nrow(maxmed_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = maxmed_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Plot weights of best mean and best median portfolios:
chart.StackedBar(w = maxmean_weight, colorset = RColorBrewer::brewer.pal(n = 10, "Spectral"),
main = "Optimal Weights of Best Mean Portfolio", ylab = "Weight")
chart.StackedBar(w = maxmed_weight, colorset = RColorBrewer::brewer.pal(n = 10, "Spectral"),
main = "Optimal Weights of Best Median Portfolio", ylab = "Weight")
Plot cumulative return of best mean and best median portfolios against the equal-weight portfolio return:
best_return <- cbind(maxmean_returns, maxmed_returns, ewp_return["2016/",]) %>%
`colnames<-`(c("Best_Mean", "Best_Median", "Equal Weight"))
chart.CumReturns(R = best_return, geometric = TRUE,
legend.loc = "topleft",
main = "Cumulative Return of Best Return Portfolios")
chart.Drawdown(R = best_return, geometric = TRUE,
legend.loc = "bottomleft",
main = "Drawdown of Best Return Portfolios")
Tables of annualized returns, risk measures and statistics:
table.AnnualizedReturns(R = best_return, scale = 52, Rf = 0.025/52, geometric = TRUE)
table.DownsideRisk(R = best_return, scale = 52, Rf = 0.025/52, MAR = 0.07/52)
table.Stats(R = best_return)
By choosing weights that maximize the median portfolio return in each optimization period instead of the mean, a higher annualized return was achieved, but it came with a higher annualized standard deviation. The mean return over the period was also higher for the Best Median portfolio, along with a higher historical expected shortfall and maximum drawdown.
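As a rough sketch of the annualization conventions behind these tables (my assumption of the usual formulas for weekly data with scale = 52; table.AnnualizedReturns has its own implementation details):

```r
# Common annualization conventions for weekly returns: geometric chaining
# for return, sqrt-of-time scaling for SD, and a Sharpe ratio from
# annualized excess return over annualized SD (toy data only)
set.seed(7)
r  <- rnorm(260, mean = 0.002, sd = 0.02)  # five years of toy weekly returns
rf <- 0.025 / 52                           # weekly risk-free rate
ann_ret <- prod(1 + r)^(52 / length(r)) - 1
ann_sd  <- sd(r) * sqrt(52)
sharpe  <- (prod(1 + r - rf)^(52 / length(r)) - 1) / ann_sd
c(ann_ret = ann_ret, ann_sd = ann_sd, sharpe = sharpe)
</imports>
```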
On the opposite end of the spectrum, in this section I find portfolios that minimize risk in each optimization period.
Find weights that minimize variance/standard deviation in each optimization period:
minstd_weight <- foreach(i = opt_periods, .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::StdDev(R = rp_returns[i, ])
opt_weight <- rand_port[which.min(tmp), ]
}
rownames(minstd_weight) <- paste("OP", 1:nrow(minstd_weight), sep = "")
data.frame(minstd_weight)
Calculate return based on the selected portfolio weights in the return period:
minstd_returns <- foreach(i = ret_periods, j = 1:nrow(minstd_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = minstd_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Find weights that minimize MAD in each optimization period:
minmad_weight <- foreach(i = opt_periods, .combine = "rbind") %do% {
tmp <- apply(X = rp_returns[i, ], MARGIN = 2, FUN = function(x) {
mean(abs(x - mean(x)))
})
opt_weight <- rand_port[which.min(tmp), ]
}
rownames(minmad_weight) <- paste("OP", 1:nrow(minmad_weight), sep = "")
data.frame(minmad_weight)
Calculate return based on the selected portfolio weights in the return period:
minmad_returns <- foreach(i = ret_periods, j = 1:nrow(minmad_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = minmad_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Find weights that minimize MeAD in each optimization period:
minmead_weight <- foreach(i = opt_periods, .combine = "rbind") %do% {
tmp <- apply(X = rp_returns[i, ], MARGIN = 2, FUN = function(x) {
median(abs(x - median(x)))
})
opt_weight <- rand_port[which.min(tmp), ]
}
rownames(minmead_weight) <- paste("OP", 1:nrow(minmead_weight), sep = "")
data.frame(minmead_weight)
Calculate return based on the selected portfolio weights in the return period:
minmead_returns <- foreach(i = ret_periods, j = 1:nrow(minmead_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = minmead_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Plot weights of minimum risk portfolios:
chart.StackedBar(w = minstd_weight, colorset = RColorBrewer::brewer.pal(n = 10, "Spectral"),
main = "Optimal Weights of Minimum Variance Portfolio", ylab = "Weight")
chart.StackedBar(w = minmad_weight, colorset = RColorBrewer::brewer.pal(n = 10, "Spectral"),
main = "Optimal Weights of Minimum MAD Portfolio", ylab = "Weight")
chart.StackedBar(w = minmead_weight, colorset = RColorBrewer::brewer.pal(n = 10, "Spectral"),
main = "Optimal Weights of Minimum MeAD Portfolio", ylab = "Weight")
Plot cumulative return of minimum risk portfolios against the equal-weight portfolio return:
min_risk <- cbind(minstd_returns, minmad_returns, minmead_returns, ewp_return["2016/",]) %>%
`colnames<-`(c("Min_Var", "Min_MAD", "Min_MeAD", "Equal Weight"))
chart.CumReturns(R = min_risk, geometric = TRUE,
legend.loc = "topleft",
main = "Cumulative Return of Minimum Risk Portfolios")
chart.Drawdown(R = min_risk, geometric = TRUE,
legend.loc = "bottomleft",
main = "Drawdown of Minimum Risk Portfolios")
Tables of annualized returns, risk measures and statistics:
table.AnnualizedReturns(R = min_risk, scale = 52, Rf = 0.025/52, geometric = TRUE)
table.DownsideRisk(R = min_risk, scale = 52, Rf = 0.025/52, MAR = 0.07/52)
table.Stats(R = min_risk)
The equal-weight portfolio had the highest annualized return and standard deviation. Focusing on the minimum risk portfolios, the minimum MAD portfolio had the worst performance, as it had the lowest returns with relatively high standard deviation and maximum drawdown.
In this section, I select the random portfolios whose mean or median return in the optimization period exceeds the equal-weight portfolio's mean or median return, and then find the portfolio with the minimum risk within that subset of random portfolios.
This means that 6 portfolios are created, namely the Mean-Variance, Mean-MAD, Mean-MeAD, Median-Variance, Median-MAD and Median-MeAD portfolios.
First of all, I find the set of random portfolios that satisfy the return requirement.
Random portfolios with a mean return above the equal-weight portfolio return in each optimization period:
mean_port <- foreach(i = opt_periods) %do% {
tmp <- PerformanceAnalytics::Mean.arithmetic(x = rp_returns[i, ])
ewp_mean <- mean(x = ewp_return[i, ])
mean_target <- rand_port[which(tmp > ewp_mean),]
}
Random portfolios with a median return above the equal-weight portfolio return in each optimization period:
median_port <- foreach(i = opt_periods) %do% {
tmp <- apply(X = rp_returns[i, ], MARGIN = 2, FUN = median)
ewp_median <- median(x = ewp_return[i, ])
median_target <- rand_port[which(tmp > ewp_median),]
}
Find the weekly return of the subset of random portfolios in each optimization period:
# Find returns from rp_returns that satisfy the target mean return constraint
meanport_return <- foreach(i = opt_periods) %do% {
tmp <- PerformanceAnalytics::Mean.arithmetic(x = rp_returns[i, ])
ewp_mean <- PerformanceAnalytics::Mean.arithmetic(x = ewp_return[i, ])
mean_target <- rp_returns[i, which(tmp > as.vector(ewp_mean))]
}
# Find returns from rp_returns that satisfy the target median return constraint
medianport_return <- foreach(i = opt_periods) %do% {
tmp <- apply(X = rp_returns[i, ], MARGIN = 2, FUN = median)
ewp_median <- median(x = ewp_return[i, ])
median_target <- rp_returns[i, which(tmp > ewp_median)]
}
The steps are similar to Section 7, except I will be using the subset of random portfolios that satisfied the mean return constraint.
Find weights that minimize variance/standard deviation in each optimization period:
meanvar_weight <- foreach(i = 1:length(meanport_return), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::StdDev(R = meanport_return[[i]])
opt_weight <- mean_port[[i]][which.min(tmp), ]
}
rownames(meanvar_weight) <- paste("OP", 1:nrow(meanvar_weight), sep = "")
data.frame(meanvar_weight)
Calculate return based on the selected portfolio weights in the return period:
meanvar_returns <- foreach(i = ret_periods, j = 1:nrow(meanvar_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = meanvar_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Find weights that minimize MAD in each optimization period:
meanmad_weight <- foreach(i = 1:length(meanport_return), .combine = "rbind") %do% {
tmp <- apply(X = meanport_return[[i]], MARGIN = 2, FUN = function(x) {
mean(abs(x - mean(x)))
})
opt_weight <- mean_port[[i]][which.min(tmp), ]
}
rownames(meanmad_weight) <- paste("OP", 1:nrow(meanmad_weight), sep = "")
data.frame(meanmad_weight)
Calculate return based on the selected portfolio weights in the return period:
meanmad_returns <- foreach(i = ret_periods, j = 1:nrow(meanmad_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = meanmad_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Find weights that minimize MeAD in each optimization period:
meanmead_weight <- foreach(i = 1:length(meanport_return), .combine = "rbind") %do% {
tmp <- apply(X = meanport_return[[i]], MARGIN = 2, FUN = function(x) {
median(abs(x - median(x)))
})
opt_weight <- mean_port[[i]][which.min(tmp), ]
}
rownames(meanmead_weight) <- paste("OP", 1:nrow(meanmead_weight), sep = "")
data.frame(meanmead_weight)
Calculate return based on the selected portfolio weights in the return period:
meanmead_returns <- foreach(i = ret_periods, j = 1:nrow(meanmead_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = meanmead_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
The steps are similar to Section 7, except I will be using the subset of random portfolios that satisfied the median return constraint.
Find weights that minimize variance/standard deviation in each optimization period:
medvar_weight <- foreach(i = 1:length(medianport_return), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::StdDev(R = medianport_return[[i]])
opt_weight <- median_port[[i]][which.min(tmp), ]
}
rownames(medvar_weight) <- paste("OP", 1:nrow(medvar_weight), sep = "")
data.frame(medvar_weight)
Calculate return based on the selected portfolio weights in the return period:
medvar_returns <- foreach(i = ret_periods, j = 1:nrow(medvar_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = medvar_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Find weights that minimize MAD in each optimization period:
medmad_weight <- foreach(i = 1:length(medianport_return), .combine = "rbind") %do% {
tmp <- apply(X = medianport_return[[i]], MARGIN = 2, FUN = function(x) {
mean(abs(x - mean(x)))
})
opt_weight <- median_port[[i]][which.min(tmp), ]
}
rownames(medmad_weight) <- paste("OP", 1:nrow(medmad_weight), sep = "")
data.frame(medmad_weight)
Calculate return based on the selected portfolio weights in the return period:
medmad_returns <- foreach(i = ret_periods, j = 1:nrow(medmad_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = medmad_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Find weights that minimize MeAD in each optimization period:
medmead_weight <- foreach(i = 1:length(medianport_return), .combine = "rbind") %do% {
tmp <- apply(X = medianport_return[[i]], MARGIN = 2, FUN = function(x) {
median(abs(x - median(x)))
})
opt_weight <- median_port[[i]][which.min(tmp), ]
}
rownames(medmead_weight) <- paste("OP", 1:nrow(medmead_weight), sep = "")
data.frame(medmead_weight)
Calculate return based on the selected portfolio weights in the return period:
medmead_returns <- foreach(i = ret_periods, j = 1:nrow(medmead_weight), .combine = "rbind") %do% {
tmp <- PerformanceAnalytics::Return.portfolio(R = return_data[i, ],
weights = medmead_weight[j, ],
geometric = TRUE,
rebalance_on = "quarters")
}
Plot weights of optimal portfolios:
optport_weights <- list(Mean_Variance = meanvar_weight, Mean_MAD = meanmad_weight, Mean_MeAD = meanmead_weight,
Median_Variance = medvar_weight, Median_MAD = medmad_weight, Median_MeAD = medmead_weight)
for(i in 1:6) {
chart.StackedBar(w = optport_weights[[i]], colorset = RColorBrewer::brewer.pal(n = 10, "Spectral"),
main = paste("Weights of", gsub(pattern = "_", replacement = " ", names(optport_weights)[i]), "Portfolio", sep = " "),
ylab = "Weight")
}
Plot cumulative return of optimal portfolios against the equal-weight portfolio return:
opt_port <- cbind(meanvar_returns, meanmad_returns, meanmead_returns, medvar_returns, medmad_returns, medmead_returns, ewp_return["2016/"]) %>%
`colnames<-`(c(names(optport_weights), "Equal Weight"))
chart.CumReturns(R = opt_port, geometric = TRUE,
main = "Cumulative Return of Optimal Portfolios",
plot.engine = "plotly")
chart.Drawdown(R = opt_port, geometric = TRUE,
main = "Drawdown of Optimal Portfolios",
plot.engine = "plotly")
Tables of annualized returns, risk measures and statistics:
table.AnnualizedReturns(R = opt_port, scale = 52, Rf = 0.025/52, geometric = TRUE)
table.DownsideRisk(R = opt_port, scale = 52, Rf = 0.025/52, MAR = 0.07/52)
table.Stats(R = opt_port)
Looking at the annualized performance, the portfolios optimized using median returns seemed to perform worse than those optimized using mean returns. The only exception was the Median-MeAD portfolio, whose annualized return was comparable to the Mean-Variance and Mean-MeAD portfolios, although it had a slightly higher annualized standard deviation. The annualized Sharpe ratio was highest for the Mean-Variance portfolio, followed by the Mean-MeAD and Median-MeAD portfolios. The Median-Variance portfolio was the worst performer, with a low annualized return but relatively high annualized standard deviation.
Looking at the table of downside risk measures, the median portfolios generally had higher maximum drawdowns and historical expected shortfall than the mean portfolios.
The median portfolios did not outperform the mean portfolios based on the sample of random hypothetical portfolios generated for this project.
The results of this project seem to suggest that maximizing the median return selects a riskier portfolio than maximizing the mean return. There is more to study about the effect of median returns in portfolio optimization.
I would like to stress that the results are not generalizable, as only 10 stocks were chosen for this project and the random portfolios generated were just a small subset of an astronomical number of possible permutations. Furthermore, the results should not be taken as evidence against the use of median measures in portfolio optimization. For example, this paper was able to find that median models provided benefits in portfolio diversification and returns.
Frost, J. Mean Absolute Deviation: Definition, Finding & Formula. Statistics By Jim. Retrieved 28 July 2022, from https://statisticsbyjim.com/basics/mean-absolute-deviation/
Wikipedia. (2022). Average absolute deviation. Retrieved 29 July 2022, from https://en.wikipedia.org/wiki/Average_absolute_deviation
Wikipedia. (2022). Median absolute deviation. Retrieved 28 July 2022, from https://en.wikipedia.org/wiki/Median_absolute_deviation